Generative vs Discriminative Models
In this post we are going to explore the differences between generative and discriminative models. In many machine learning tasks we assume that the output \(y\) is generated by some function \(f(x)\) of the input \(x\). We further assume that we can model this relationship with the conditional probability \(P(y|x)\).
Now there are two fundamentally different approaches to estimating \(P(y|x)\):
Generative models
A generative model estimates the joint probability density \(P(x, y)\). This joint distribution can be used to generate new data pairs \((x, y)\), or to discriminate between classes via Bayes’ rule:

\[
P(y|x) = \frac{P(x, y)}{P(x)} = \frac{P(x|y)\,P(y)}{\sum_{y'} P(x|y')\,P(y')}
\]

Examples of generative models include linear discriminant analysis and the naive Bayes classifier.
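To make this concrete, here is a minimal sketch of a Gaussian naive Bayes classifier built from scratch. The toy data, the independent-feature Gaussian assumption, and all names (`X`, `y`, `priors`, `means`, `stds`) are illustrative assumptions rather than anything from the post; the point is only that estimating \(P(y)\) and \(P(x|y)\) gives us the joint distribution, which we can use both to classify via Bayes’ rule and to sample new pairs \((x, y)\).

```python
# Minimal sketch of a generative classifier (Gaussian naive Bayes).
# Assumes continuous features and integer class labels 0..K-1.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(3.0, 1.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

classes = np.unique(y)
priors = np.array([np.mean(y == c) for c in classes])        # P(y)
means = np.array([X[y == c].mean(axis=0) for c in classes])  # per-class feature means
stds = np.array([X[y == c].std(axis=0) for c in classes])    # per-class feature stds

def log_likelihood(x, c):
    """log P(x | y=c) under the naive (independent-feature) Gaussian assumption."""
    return np.sum(-0.5 * np.log(2 * np.pi * stds[c] ** 2)
                  - (x - means[c]) ** 2 / (2 * stds[c] ** 2))

def predict_proba(x):
    """P(y | x) via Bayes' rule: proportional to P(x | y) * P(y)."""
    log_joint = np.array([np.log(priors[c]) + log_likelihood(x, c) for c in classes])
    joint = np.exp(log_joint - log_joint.max())  # numerically stable normalisation
    return joint / joint.sum()

def sample(c):
    """Generate a new data pair (x, y=c) by sampling from P(x | y=c)."""
    return rng.normal(means[c], stds[c]), c

print(predict_proba(np.array([1.5, 1.5])))  # posterior over the two classes
print(sample(1))                            # a freshly generated (x, y) pair
```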
Discriminative models
A discriminative model, on the other hand, only “learns” the posterior probability \(P(y|x)\) and is not able to generate new data.
Examples of discriminative models include logistic regression, support vector machines, and decision trees.
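For contrast, here is a minimal sketch using scikit-learn’s logistic regression on the same kind of made-up toy data as above. The fitted model outputs \(P(y|x)\) directly; nothing in it represents \(P(x|y)\) or \(P(x)\), so there is nothing to sample new data from.

```python
# Minimal sketch of a discriminative classifier: logistic regression
# models P(y | x) directly and never represents P(x). The toy data is
# a hypothetical stand-in, not from the post.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(3.0, 1.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)

# The model gives the posterior P(y | x) for a new point ...
print(clf.predict_proba([[1.5, 1.5]]))

# ... but unlike the generative model above, there is no class-conditional
# distribution to sample from, so it cannot generate new (x, y) pairs.
```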
More generally, we can say that a generative model learns the distribution of the classes, so that we can generate new instances, while a discriminative model learns the decision boundary between the classes.